Abstract: A burgeoning paradigm in algorithm design is algorithms with predictions, where methods can take advantage of a (possibly imperfect) prediction about the instance. While past work has focused on using predictions to improve competitive ratios, runtimes, and other performance measures, this talk considers two broader questions: (1) how to learn the predictions themselves and (2) how to apply the paradigm to improve the utility of differentially private algorithms.
For learning, we introduce a framework for provably learning predictions; it works by running online learning algorithms on surrogate losses that upper-bound the algorithms' costs. We apply our approach to numerous problems, including bipartite matching, page migration, and job scheduling. In several settings we improve upon existing sample complexity results, while in others we provide the first learning-theoretic guarantees.
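As a rough illustration of that framework (a toy of our own devising, not a construction from the talk), the sketch below learns a vector-valued prediction by running online gradient descent on a quadratic surrogate that is assumed to upper-bound the algorithm's cost on each instance; a regret bound on the surrogate then controls the total extra cost of using the learned predictions.

```python
import numpy as np

# Toy sketch of the surrogate-loss idea (every detail here is an illustrative
# assumption): suppose instance t has an unknown optimal prediction y_t, and
# running the prediction-augmented algorithm with prediction w costs at most
# ||w - y_t||^2. Online gradient descent on this convex surrogate gives a
# regret bound that upper-bounds the total cost of the learned predictions.

rng = np.random.default_rng(0)
d, T = 5, 200
y_star = rng.normal(size=d)                      # stable underlying optimum
optima = y_star + 0.1 * rng.normal(size=(T, d))  # per-instance optima

w = np.zeros(d)  # current prediction
total = 0.0
for t, y_t in enumerate(optima, start=1):
    total += float(np.sum((w - y_t) ** 2))  # surrogate bound on this round's cost
    grad = 2.0 * (w - y_t)                  # gradient of the quadratic surrogate
    w -= 0.5 / np.sqrt(t) * grad            # OGD with a decaying step size

print(f"average surrogate loss: {total / T:.4f}")
print(f"distance of learned prediction from y*: {np.linalg.norm(w - y_star):.4f}")
```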
For privacy, we design algorithms with predictions for several important tasks, including covariance estimation and multiple quantile release. The methods' utility guarantees scale with natural measures of prediction quality while privacy is maintained. Our results require minimal assumptions, can incorporate robustness to noisy predictions, and extend techniques from the first part of the talk to learning predictions from other (potentially sensitive) datasets.
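To make "utility scales with prediction quality" concrete, here is a minimal sketch of one natural instantiation for single-quantile release (the function name, the Laplace-shaped prior, and the midpoint approximation are all our assumptions for illustration, not necessarily the paper's exact mechanism): the exponential mechanism over candidate outputs, with the prediction supplying a prior.

```python
import numpy as np

# Illustrative sketch: exponential mechanism for a single quantile q, with a
# prediction encoded as a prior over candidate outputs. The utility of an
# output o is -|#{x_i <= o} - q*n|; it is constant on each gap between sorted
# data points and has sensitivity 1, so (up to the midpoint approximation of
# the prior mass per gap) the sampler realizes the eps-DP exponential
# mechanism. A prediction that concentrates prior mass on high-utility gaps
# yields a more accurate release at the same privacy budget.

def dp_quantile_with_prediction(x, q, eps, lo, hi, prior_density, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    x = np.clip(np.sort(np.asarray(x, dtype=float)), lo, hi)
    n = len(x)
    edges = np.concatenate(([lo], x, [hi]))    # gap i holds outputs with i points below
    utils = -np.abs(np.arange(n + 1) - q * n)  # per-gap utility
    widths = np.diff(edges)
    mids = (edges[:-1] + edges[1:]) / 2
    # log-weights: exponential-mechanism score plus approximate prior mass per gap
    logw = (eps / 2) * utils + np.log(widths + 1e-12) + np.log(prior_density(mids) + 1e-12)
    logw -= logw.max()                         # numerical stability
    p = np.exp(logw)
    p /= p.sum()
    i = rng.choice(n + 1, p=p)
    return rng.uniform(edges[i], edges[i + 1])  # sample uniformly within the chosen gap

# Usage: a prediction near the true median sharpens the prior.
data = np.random.default_rng(1).normal(10.0, 2.0, size=500)
predicted_median = 9.8                                   # hypothetical external prediction
prior = lambda o: np.exp(-np.abs(o - predicted_median))  # Laplace-shaped prior around it
print(dp_quantile_with_prediction(data, q=0.5, eps=1.0, lo=0.0, hi=20.0,
                                  prior_density=prior))
```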
This talk is based on joint work with Nina Balcan, Ameet Talwalkar, and Sergei Vassilvitskii to appear at NeurIPS 2022, and on joint work with Kareem Amin, Travis Dick, and Sergei Vassilvitskii available as a preprint.